AGI

AGI is Already Here

Well-known AI researchers Blaise Agüera y Arcas and Peter Norvig argue in NOEMA that Artificial General Intelligence Is Already Here, mainly because state-of-the-art GPT systems already behave in ways that appear to go beyond their training and show signs of genuinely learning new tasks.

Geoffrey Hinton, one of the godfathers of AI, said: “I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future.”

Another AI great, Yoshua Bengio, said: “The recent advances suggest that even the future where we know how to build superintelligent AIs (smarter than humans across the board) is closer than most people expected just a year ago.”

AGI is a Long Way From Now

But Gary Marcus responds that Reports of the birth of AGI are greatly exaggerated because, despite the recent progress, machines still fail at tasks that wouldn’t trouble a five-year-old, as well as at rote procedures like five-digit multiplication.
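
Marcus’s multiplication example is easy to turn into a concrete probe. Below is a minimal sketch, assuming a hypothetical ask_model callable (swap in whatever LLM client you actually use): generate random five-digit products, ask for the answer, and grade against Python’s exact integer arithmetic.

```python
import random
import re

def multiplication_eval(ask_model, n_trials=100, seed=0):
    """Score a text-in/text-out model on random five-digit multiplications.

    `ask_model` is a hypothetical stand-in: any callable mapping a prompt
    string to a reply string (plug in your own LLM client). Python's exact
    integer arithmetic is the ground truth.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        a, b = rng.randint(10000, 99999), rng.randint(10000, 99999)
        reply = ask_model(f"What is {a} * {b}? Answer with digits only.")
        digits = re.sub(r"\D", "", reply)  # tolerate commas and whitespace
        correct += digits == str(a * b)
    return correct / n_trials

# Sanity check: an oracle that parses the two factors out of the prompt
# and multiplies them exactly. It scores 1.0 by construction; Marcus's
# point is that GPT-style models, probed this way, fall well short.
def oracle(prompt):
    a, b = map(int, re.findall(r"\d+", prompt))
    return str(a * b)

print(multiplication_eval(oracle))  # -> 1.0
```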

Mind Prison summarizes the arguments for What if AGI isn’t coming, pointing out that the core ideas behind neural networks haven’t changed in 50 years, and that much of the impressive progress we see now comes from massive scaling that is hitting diminishing returns. As evidence, note that GPT-4 uses 50x the resources of GPT-3.5 but is nowhere near 50x better.
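
The diminishing-returns claim has a simple quantitative shape. Scaling-law studies (e.g. Kaplan et al. 2020) fit language-model loss as a power law in training compute; the toy constants below are invented for illustration, not fitted to any real model, but they show why a 50x jump in resources buys far less than a 50x jump in quality.

```python
# A toy diminishing-returns calculation. Scaling-law papers report that
# language-model loss falls roughly as a power law in training compute,
# L(C) = L_inf + a * C**(-alpha). The constants below are made up for
# illustration only, not fitted to any real model.
L_INF, A, ALPHA = 1.7, 10.0, 0.05

def loss(compute):
    """Hypothetical loss after training with `compute` (arbitrary units)."""
    return L_INF + A * compute ** (-ALPHA)

for scale in (1, 10, 50):
    print(f"{scale:>3}x compute -> loss {loss(scale):.3f}")

# Prints:
#   1x compute -> loss 11.700
#  10x compute -> loss 10.613
#  50x compute -> loss 9.923
# On this curve, 50x the compute buys roughly a 15% loss reduction,
# which is the sense in which GPT-4 is nowhere near 50x better.
```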

Who Cares

Subprime Intelligence by Ed Zitron

Generative AI’s greatest threat is that it is capable of creating a certain kind of bland, generic content very quickly and cheaply. As I discussed in my last newsletter, media entities are increasingly normalizing their content to please search engine algorithms, and the jobs that involve pooling affiliate links and answering where you can watch the Super Bowl are very much at risk. The normalization of journalism — the consistent point to which many outlets decide to write about the exact same thing — is a weak point that makes every outlet “exploring AI” that bit more scary, but the inevitable outcome is that these models are not reliable enough to actually replace anyone, and those that have experimented with doing so have found themselves deeply embarrassed.

My Thoughts

A key problem for the AGI argument is that nobody can define “intelligence” in a way that agrees with our (unspoken) intuition that it only applies to humans.